ABSTRACT
The coronavirus disease 2019 (COVID-19) pandemic has caused massive disruption to social stability, threatening human life and the economy. An effective forecasting system is important for providing an early signal of COVID-19 infection risk so that the authorities are prepared to protect people from the worst outcomes. However, building a good forecasting model for infection risks across different cities or regions is not an easy task, because many influential factors are difficult to identify manually. To address these limitations, we propose a deep graph learning model, called PANDORA, to predict COVID-19 infection risks by considering all essential factors and integrating them into a geographical network. The framework uses geographical position relationships and transportation frequency as higher-order structural properties formulated by higher-order network structures (i.e., network motifs). Moreover, four significant node attributes (i.e., multiple features of a particular area, including climate, medical condition, economy, and human mobility) are also considered. We propose three different aggregators to better combine node attributes and structural features: Hadamard, Summation, and Connection. Experimental results on real data show that PANDORA outperforms the baseline methods with higher accuracy and faster convergence, regardless of which aggregator is chosen.
ABSTRACT
Misinformation can spread easily on end-to-end encrypted messaging platforms such as WhatsApp, where many groups of people communicate with each other. Approaches to combating misinformation may also differ between younger and older adults. In this paper, we investigate how young adults encountered and dealt with misinformation on WhatsApp in private group chats during the first year of the COVID-19 pandemic. To do so, we conducted a qualitative interview study with 16 WhatsApp users who were university students based in the United States. We uncovered three main findings. First, all participants encountered misinformation multiple times a week in group chats, often attributing its source to well-intentioned family members. Second, although participants were able to identify misinformation and fact-check using diverse methods, they often remained passive to avoid negatively impacting family relations. Third, participants agreed that WhatsApp bears a responsibility to curb misinformation on the platform but expressed concerns about its ability to do so given the platform's steadfast commitment to content privacy. Our findings suggest that conventional content moderation techniques used by open platforms such as Twitter and Facebook are unfit to tackle misinformation on WhatsApp. We offer alternative design suggestions that take into consideration the social nuances and privacy commitments of end-to-end encrypted group chats. Our paper also contributes to broader discussions among platform designers, researchers, and end users on misinformation in privacy-preserving environments.